
    Development of an injectable composite for bone regeneration

    With the development of minimally invasive surgical techniques, there is growing interest in the research and development of injectable biomaterials, especially for orthopedic applications. To enhance the overall benefit of surgery for the patient, the BIOSINJECT project aims to prepare a new generation of mineral-organic composites for bone regeneration that combine bioactivity, therapeutic activity and ease of use, broadening the application domains of current bone mineral cements and offering an alternative strategy to address their poor resorbability, difficult injectability and risk of infection. First, a physico-chemical study demonstrated the feasibility of self-setting injectable composites combining a calcium carbonate-calcium phosphate cement with polysaccharides (tailor-made or commercial polymers), with or without an antibacterial agent in the composite formulation. The bone cell response and antimicrobial activity of the composite were then evaluated in vitro. Finally, an animal study was performed to evaluate the resorption rate and bone tissue response; the histological analysis is still in progress. These multidisciplinary and complementary studies yielded promising results with a view to the industrial development of such composites for dental and orthopaedic applications.

    Circulatory contributors to the phenotype in hereditary hemorrhagic telangiectasia

    Hereditary hemorrhagic telangiectasia (HHT) is mechanistically and therapeutically challenging, not only because of the molecular and cellular perturbations that generate vascular abnormalities, but also because of the modifications to circulatory physiology that result, which are likely to exacerbate vascular injury. First, most HHT patients have visceral arteriovenous malformations (AVMs). Significant visceral AVMs reduce the systemic vascular resistance: supra-normal cardiac outputs are required to maintain arterial blood pressure, and may result in significant pulmonary venous hypertension. Second, bleeding from nasal and gastrointestinal telangiectasia leads to iron losses of such magnitude that in most cases, diet is insufficient to meet the ‘hemorrhage-adjusted iron requirement.’ The resultant iron deficiency restricts erythropoiesis, leading to anemia and further increases in cardiac output. Low iron levels are also associated with venous and arterial thromboses, elevated Factor VIII, and increased platelet aggregation to circulating 5HT (serotonin). Third, recent data highlight that reduced oxygenation of blood due to pulmonary AVMs results in a graded erythrocytotic response to maintain arterial oxygen content, and higher stroke volumes and/or heart rates to maintain oxygen delivery. Finally, HHT-independent factors such as diet, pregnancy, sepsis and other intercurrent illnesses also influence vascular structures, hemorrhage, and iron handling in HHT patients. These considerations emphasize the complexity of the mechanisms that impact on vascular structures in HHT, and also offer opportunities for targeted therapeutic approaches.
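    The hemodynamic trade-off described here can be illustrated with the standard clinical relation SVR = 80 × (MAP − CVP) / CO. A minimal sketch follows; the pressure and resistance values are illustrative assumptions, not patient data from the study:

```python
def required_cardiac_output(map_mmHg, cvp_mmHg, svr_dyn):
    """Cardiac output (L/min) needed to sustain a given mean arterial
    pressure (MAP) against a given systemic vascular resistance (SVR,
    in dyn.s.cm^-5), rearranged from SVR = 80 * (MAP - CVP) / CO."""
    return 80.0 * (map_mmHg - cvp_mmHg) / svr_dyn

# Assumed normal physiology: SVR ~1100 dyn.s.cm^-5, MAP 90, CVP 5 mmHg
normal = required_cardiac_output(90, 5, 1100)   # ~6.2 L/min
# Hypothetical visceral AVM shunting lowering SVR to ~700 dyn.s.cm^-5
with_avm = required_cardiac_output(90, 5, 700)  # ~9.7 L/min
print(round(normal, 1), round(with_avm, 1))
```

    The point of the sketch is only the direction of the effect: when AVMs lower the resistance, the cardiac output required to hold the same arterial pressure rises well above the normal range.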

    Old Girod Street Cemetery


    A grain of sand or a handful of dust?

    The recent paper by Girod et al (2013) analyses the implications of stringent global GHG mitigation targets for the intensities of, inter alia, broad consumption categories like food, shelter and transport. This type of scenario modeling and inverse reasoning helps us better understand the potential or required contribution of changes in consumption patterns to mitigation. This is welcome because, while there is a growing literature on the behavioral and consumption dimensions of mitigation, there is still no widely accepted framework for systematically studying the interactions between supply and demand, behavior and technology, production and consumption. So we are left with the question: what exactly do we need to do to stabilize GHG concentrations? Intuitively, we take our cue from Aristotelian logic: if A implies B, then in order to avoid B we had better prevent A. At this level it is clear that we need either to decarbonize our energy systems or to remove CO2 from the atmosphere. When multiple causes are at work, however, this neat Aristotelian picture is no longer appropriate (Cartwright 2003). Leaving capture and storage aside, we need to decarbonize our systems, but we also need to reduce energy intensity, change our personal habits, eat less meat, use more public transportation, and so on. What is the right balance between these factors? Can we do just one thing, say, eat less meat, but not another, and still achieve some fairly ambitious mitigation goals? In other words, what are necessary and what are sufficient sets of measures to reach these goals?

    Let us first look at the question of necessary measures. This gets tricky when applied to individual consumers: it is somewhat akin to the notorious question of whether a heap of sand is still a heap when you take away one grain (Sainsbury 2011). If you are inclined to say yes, think once more: what happens when you take away another one, and another one, and so forth? Eventually you are forced to call a single grain a heap. By a similar type of reasoning, none of us consumers makes any difference individually. It is tempting to conclude that consumption-side mitigation is therefore not sufficient. But it also does not really seem necessary in the strict sense of the word, as long as some supply-side measure can compensate for a demand-side measure not taken. Thus each one of us could go on as before, as long as someone else or some technology compensates for our own failure to change. To be sure, such an elusive argument is, to say the least, not very helpful, but it highlights the difficulty of deriving very specific courses of action from aggregate goals.

    So it takes a more prescriptive approach to get things going. The pragmatic mitigation wedge analysis by, e.g., Pacala and Socolow (2004) has highlighted that a relatively small number of dedicated and practicable measures is sufficient to achieve deep emission cuts, but the balance of these measures in the analyses is understandably somewhat arbitrary. Other analyses, based on Integrated Assessment Models (IAMs), have focused more specifically on the questions of where and when measures would be implemented most cost-effectively. From such studies one can learn about carbon price trajectories, technology diffusion rates, and possibly about conditional probabilities for reaching targets over time. However, IAMs are rarely used to assess systematically the necessary or sufficient conditions for reaching a given target, and when they are, the outcome is often (with the occasional exception) disappointingly generic. Moreover, the controversies arising from value-laden allocations derived from IAMs are well known: in these models, emissions are typically reduced where it (supposedly) can be done most cheaply, i.e. in low-wage countries, or according to some burden-sharing scheme. The allocation of mitigation over time is essentially determined by the magnitude of the discount rate and thus by a valuation of future versus present expenditures.

    Refreshingly, Girod et al (2013) discuss a selection of allocation schemes across sectors, including consumers, that allow us to get an impression of the requirements and bounds for each of a set of stylized demand activities within the context of a plausible overall IAM story. Girod et al (2013) thus make progress in addressing consumer behavior in the context of a wider set of activities contributing to GHG emissions, and of the technological options to reduce them, without being committed to any particular allocation scheme. Further work will have to address issues raised by a recent study (Schweizer and Kriegler 2012) on the limitations of the scenario space in earlier IPCC assessments, to avoid past omissions. Moreover, IAMs in general need to become more transparent and more responsive to the needs of stakeholders. They also need to be applied specifically to identify concrete incentives, such as co-benefits of mitigation (Wagner 2012), and mechanisms (beyond stylized carbon markets) that nudge us towards low-emission pathways.

    References
    Cartwright N 2003 Hunting Causes and Using Them: Approaches in Philosophy and Economics 1st edn (Cambridge: Cambridge University Press)
    Girod B, Van Vuuren D P and Hertwich E G 2013 Global climate targets and future consumption level: an evaluation of the required GHG intensity Environ. Res. Lett. 8 014016
    Pacala S and Socolow R 2004 Stabilization wedges: solving the climate problem for the next 50 years with current technologies Science 305 968–72
    Sainsbury R M 2011 Paradoxes 3rd edn (Cambridge: Cambridge University Press)
    Schweizer V J and Kriegler E 2012 Improving environmental change research with systematic techniques for qualitative scenarios Environ. Res. Lett. 7 044011
    Wagner F 2012 Mitigation here and now or there and then: the role of co-benefits Carbon Manag. 3 325–
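    The wedge arithmetic of Pacala and Socolow (2004) and the discount-rate sensitivity mentioned in the commentary can be sketched in a few lines. This is a toy illustration: the 50-year horizon and the 1 GtC/yr ramp follow the framing of the wedge paper, while the discount rates compared are arbitrary assumptions:

```python
def wedge_cumulative_GtC(years=50, ramp_GtC_per_yr=1.0):
    """Cumulative carbon avoided by one stabilization wedge that ramps
    linearly from 0 to ramp_GtC_per_yr over `years` (triangle area)."""
    return 0.5 * years * ramp_GtC_per_yr

def discount_factor(rate, year):
    """Present-value weight of a cost incurred `year` years from now."""
    return 1.0 / (1.0 + rate) ** year

print(wedge_cumulative_GtC())        # 25.0 GtC avoided per wedge
print(7 * wedge_cumulative_GtC())    # 175.0 GtC for a 7-wedge triangle
# At 5% vs 1% discounting, a cost incurred in year 40 carries a very
# different present-value weight, which is what drives the allocation
# of mitigation over time in IAMs
print(round(discount_factor(0.05, 40), 3), round(discount_factor(0.01, 40), 3))
```

    The second pair of numbers makes the commentary's point concrete: the choice of discount rate, not physics, largely decides how much mitigation an IAM schedules now versus later.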

    Diseño y Aprendizaje Infantil (Design And Childhood Learning)

    This article is submitted, in lecture format, to the thematic category “Diseño + Educación” (Design + Education) of the Cuarto Encuentro Internacional de Investigación en Diseño (Fourth International Design Research Encounter). Its objective is to establish a conceptual framework that situates design, and the designer, within the fields of education and childhood learning, seeking to answer the research question: in what way can design serve as a medium of learning in children's education?

    Kinematics of the 1991 Randa rockslides (Valais, Switzerland)

    About 22 million m3 of rock fell from a cliff near the village of Randa (10 km north of Zermatt, Switzerland) on 18 April 1991. A second, retrogressive rockslide of about 7 million m3 followed on 9 May 1991. At present, a rock mass situated above the scarp, involving several million m3 of rock, is still slowly moving toward the valley. A kinematic study of this well-documented rockslide was made a posteriori in order to identify the parameters relevant to the detection of such failures involving large volumes of rock. A 3-D model of the pre-rockslide geometry is presented and used to interpret the geostructural, hydrogeological, and chronological data. The steepness of the cliff, the massive lithology (mainly orthogneiss), the location on a topographic ridge at the confluence between a glacial cirque and the main valley, and the existence of previous instability events were the preexisting field conditions that affected the stability of the area. The structural cause of instability was a 30°-dipping, more than 500-m-long persistent fault, which cut the base of the rock face. Together with a steeply dipping set of persistent joints, this basal discontinuity delimited a 20-million-m3 rock block with a potential sliding direction approximately parallel to the axis of the valley. To the north, the fractures delimiting the unstable mass were less persistent and separated by rock bridges; this rock volume acted as a key block. This topographic and structural configuration was freed from glacier support about 15 000 years BP. The various mechanisms of degradation that led to the final loss of equilibrium required various amounts of time. During the late- and post-glacial periods, seismic activity and weathering of the orthogneiss along the fissure network, due to infiltration of meteoric water, combined to reduce the mechanical resistance of the sliding surfaces and the rock bridges. In addition, crystallisation of clay minerals due to mineralogical alteration of the fault gouge accumulated along the sliding surface, reducing its angle of internal friction and sealing the surface against water circulation. Once this basal fracture began to act as an aquiclude, the seasonal increase of the hydraulic head in the fissures promoted hydraulic fracturing at the highly stressed edges of the key block. This mechanical degradation accelerated during the 20-year period before the 1991 rockslides, giving rise to increasing rockfall activity that constituted a forewarning sign. The final triggering event corresponded to a snow-melt period with a high water table, leading to fracturing around the key block. On 18 April 1991, the key block finally failed, allowing subsidiary orthogneiss blocks to slide; they fell in turn over a period of several hours. The 9 May 1991 rockslide was the first of a series of expected future retrogressive re-equilibration stages of the highly fractured and decompressed paragneisses, which lie on the orthogneiss base cut by the 18 April event.
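    The roles of a degrading friction angle and of seasonal water pressure in the final loss of equilibrium can be sketched with a textbook limit-equilibrium balance for planar sliding (cohesion and rock bridges neglected). The weights, friction angles and uplift force below are illustrative assumptions, not values from the study:

```python
import math

def factor_of_safety(weight_MN, dip_deg, friction_deg, uplift_MN=0.0):
    """Limit-equilibrium factor of safety for purely frictional sliding
    on a plane dipping at `dip_deg`, with optional water uplift U:
    FS = (W*cos(beta) - U) * tan(phi) / (W * sin(beta))."""
    beta = math.radians(dip_deg)
    phi = math.radians(friction_deg)
    driving = weight_MN * math.sin(beta)
    resisting = (weight_MN * math.cos(beta) - uplift_MN) * math.tan(phi)
    return resisting / driving

# Dry block on a 30-degree basal fault: FS reduces to tan(phi)/tan(30)
print(factor_of_safety(100, 30, 35))  # phi above the dip: FS > 1, stable
print(factor_of_safety(100, 30, 27))  # clay alteration lowers phi: FS < 1
# Seasonal hydraulic head adds uplift on the sealed basal surface
print(factor_of_safety(100, 30, 35, uplift_MN=30))
```

    The sketch shows the two mechanisms the abstract describes acting in the same direction: clay gouge lowers tan(phi), and the aquiclude lets water pressure reduce the effective normal force, so a block that was marginally stable when dry can fail during snow-melt.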

    The Efficacy of Simulation as a Pedagogy in Facilitating Pre-Service Teachers’ Learning About Emotional Self-Regulation and its Relevance to the Teaching Profession

    This study was undertaken in response to the imperative that teacher education courses incorporate the National Professional Standards for Teachers, in particular Standard 7, which deals with the professional engagement of teachers (AITSL, 2011). It aimed to evaluate the efficacy of simulation and active recall as a learner-centred pedagogy in facilitating pre-service teachers’ learning about their capacity to self-regulate emotionally and its relevance to the profession. A simulated ‘critical incident’ was used in a lecture to guide students (n=106) to analyse and understand their emotional responses to an altercation between the lecturer and a colleague. The evaluation involved both quantitative and qualitative data collection. The study generated six useful insights associated with the efficacy of simulation pedagogy and revealed convincingly that this pedagogy can engage students actively in learning about the importance of emotional self-regulation in relation to their professional role as teachers.

    Microscopic Calculation of Fission Fragment Energies for the 239Pu(nth,f) Reaction
